3 research outputs found
A Survey on Malware Detection with Graph Representation Learning
Malware detection has become a major concern due to the growing number and
complexity of malware samples. Traditional detection methods based on
signatures and heuristics suffer from poor generalization to unknown attacks
and can easily be circumvented with obfuscation techniques. In recent years,
Machine Learning (ML), and notably Deep Learning (DL), have achieved
impressive results in malware detection by learning useful representations
from data, and have become the preferred alternative to
traditional methods.
traditional methods. More recently, the application of such techniques on
graph-structured data has achieved state-of-the-art performance in various
domains and demonstrates promising results in learning more robust
representations from malware. Yet, no literature review focusing on graph-based
deep learning for malware detection exists. In this survey, we provide an
in-depth literature review to summarize and unify existing works under the
common approaches and architectures. We notably demonstrate that Graph Neural
Networks (GNNs) reach competitive results in learning robust embeddings from
malware represented as expressive graph structures, enabling efficient
detection by downstream classifiers. This paper also reviews adversarial
attacks used to fool graph-based detection methods. Challenges and
future research directions are discussed at the end of the paper.
Comment: Preprint, submitted to ACM Computing Surveys in March 2023. For any
suggestions or improvements, please contact me directly by e-mail.
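The pipeline the survey describes, a GNN that turns a malware graph into an embedding consumed by a downstream classifier, can be sketched in a few lines. The following minimal NumPy illustration shows one mean-aggregation message-passing layer with a mean-pooling readout; the toy call graph, feature values, and weight matrix are purely illustrative assumptions, not taken from any surveyed method.

```python
import numpy as np

def gnn_layer(adj, feats, weight):
    """One mean-aggregation message-passing layer: every node averages
    the features of itself and its neighbours, then applies a learned
    linear map followed by a ReLU."""
    adj_self = adj + np.eye(adj.shape[0])      # add self-loops
    deg = adj_self.sum(axis=1, keepdims=True)  # per-node neighbourhood size
    agg = (adj_self @ feats) / deg             # mean aggregation
    return np.maximum(agg @ weight, 0.0)       # linear map + ReLU

def graph_embedding(adj, feats, weight):
    """Mean-pool node embeddings into one graph-level vector that a
    downstream classifier could consume."""
    return gnn_layer(adj, feats, weight).mean(axis=0)

# Toy "call graph" with 3 nodes (e.g. functions) and 2-dim node features.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
feats = np.array([[1., 0.],
                  [0., 1.],
                  [1., 1.]])
rng = np.random.default_rng(0)
weight = rng.normal(size=(2, 4))   # untrained weights, for shapes only

emb = graph_embedding(adj, feats, weight)  # a 4-dim graph embedding
```

In practice the surveyed works stack several such layers and learn the weights end-to-end; this sketch only shows how a graph-level embedding emerges from node features and structure.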
A Benchmark of Graph Augmentations for Contrastive Learning-Based Network Attack Detection with Graph Neural Networks
Graph Neural Networks (GNNs) have recently emerged as powerful tools for detecting network attacks, due to their ability to capture complex relationships between hosts. However, acquiring labeled datasets in the cybersecurity domain is challenging. Consequently, efforts are directed towards learning representations directly from data using self-supervised approaches. In this study, we focus on contrastive methods that aim to maximize agreement between the original graph and positive graph augmentations, while minimizing agreement with negative graph augmentations. Our goal is to benchmark 10 augmentation techniques and provide more efficient augmentations for network data. We systematically evaluate 100 pairs of positive and negative graphs and present our findings in a table, highlighting the best-performing techniques. In particular, the experiments demonstrate that leveraging topological and attributive augmentations in the positive and negative graphs generally improves performance, with up to 1.8% and 2.2% improvement in F1-score on two different datasets. The analysis further showcases the intrinsic connection between the performance of graph augmentations and the underlying data, highlighting the need for careful prior selection to achieve optimal results.
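To make the "topological" and "attributive" augmentation terminology concrete, here is a small NumPy sketch of one augmentation of each kind: random edge dropping (topological) and random feature-dimension masking (attributive). These are standard techniques of the sort such benchmarks compare; the function names, toy graph, and drop probabilities are illustrative assumptions, not the paper's actual 10 techniques.

```python
import numpy as np

def drop_edges(adj, p, rng):
    """Topological augmentation: remove each undirected edge
    independently with probability p."""
    keep = rng.random(adj.shape) >= p  # per-entry keep decisions
    keep = np.triu(keep, 1)            # one decision per undirected edge
    keep = keep | keep.T               # mirror to keep adjacency symmetric
    return adj * keep

def mask_features(feats, p, rng):
    """Attributive augmentation: zero out each feature dimension
    independently with probability p, for all nodes at once."""
    keep = rng.random(feats.shape[1]) >= p
    return feats * keep

# Toy host-communication graph: 3 fully connected hosts, 8-dim features.
rng = np.random.default_rng(42)
adj = np.array([[0., 1., 1.],
                [1., 0., 1.],
                [1., 1., 0.]])
feats = rng.normal(size=(3, 8))

# A lightly corrupted view, usable as a positive sample in contrastive
# training (a negative sample would use heavier or adversarial corruption).
aug_adj = drop_edges(adj, p=0.2, rng=rng)
aug_feats = mask_features(feats, p=0.2, rng=rng)
```

A contrastive objective would then push the embedding of `(aug_adj, aug_feats)` towards that of the original graph while pushing negative views away.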